
    Data harmonization in PET imaging

    Medical imaging physics has advanced considerably in recent years, providing clinicians and researchers with increasingly detailed images that are well suited to a quantitative approach typical of the hard sciences, based on the measurement and analysis of clinically relevant quantities extracted from the images themselves. This approach is known as quantitative imaging. The possibility of sharing data quickly, the development of machine learning and data mining techniques, and the increasing availability of computational power and digital storage that characterize this age constitute a great opportunity for quantitative imaging studies. Interest in large multicentric databases that gather images from single research centers is growing year after year. Large datasets offer very interesting research perspectives, primarily because they increase the statistical power of studies. At the same time, they raise a compatibility issue between the data themselves: images acquired with different scanners and protocols can differ greatly in quality, and measures extracted from images of different quality may not be comparable with each other. Harmonization techniques have been developed to circumvent this problem. Harmonization refers to all efforts to combine data from different sources and provide users with a comparable view of data from different studies. Harmonization can be done before acquiring data (a priori), by choosing appropriate acquisition protocols through a preliminary joint effort between research centers, or after acquisition (a posteriori), i.e. images are grouped into a single dataset and any effects on measures caused by technical acquisition factors are then removed. Although a-priori harmonization guarantees the best results, it is often not used for practical and/or technical reasons. In this thesis I focus on a-posteriori harmonization.
It is important to note that in multicentric studies, in addition to the technical variability related to scanners and acquisition protocols, there may be demographic variability that makes single-center samples not statistically equivalent to each other. The wide individual variability that characterizes human beings, even more pronounced when patients are enrolled from very different geographical areas, can certainly exacerbate this issue. In addition, we must consider that biological processes are complex phenomena: quantitative imaging measures can be affected by numerous confounding demographic variables, even ones apparently unrelated to the measures themselves. A good harmonization method should preserve inter-individual variability while at the same time removing all effects due to technical acquisition factors. Heterogeneity in acquisition, together with great inter-individual variability, makes harmonization very hard to achieve. Harmonization methods currently used in the literature preserve only the inter-subject variability described by a set of known confounding variables, while the variability due to unknown confounders is wrongly removed. This can lead to incorrect harmonization, especially if the unknown confounders play an important role. The issue is emphasized in practice, since demographic variables known to play a major role are sometimes unavailable. The final goal of my thesis is to propose a harmonization method, developed in the context of amyloid Positron Emission Tomography (PET), which removes the effects of variability induced by technical factors while keeping all inter-individual differences. Since knowing all the demographic confounders is almost impossible, both practically and theoretically, my proposal does not require knowledge of these variables.
The main idea is to characterize image quality through a set of quality measures evaluated in regions of interest (ROIs), chosen to be as independent as possible from anatomical and clinical variability so that they exclusively highlight the effect of technical factors on image texture. Ideally, this allows the between-subject variability to be decoupled from the technical variability: the latter can be directly removed while the former is automatically preserved. Specifically, I defined and validated three quality measures based on image texture properties. In addition, I used an existing quality metric, and I considered the reconstruction matrix dimension to take image resolution into account. My work has been performed on a multicentric dataset consisting of 1001 amyloid PET images. Before dealing specifically with harmonization, I handled some important preliminary issues: I built a relational database to organize and manage the data, and I developed an automated image pre-processing algorithm for registration and quantification. This work might also be used in other imaging contexts: in particular, I believe it could be applied to fluorodeoxyglucose (FDG) PET and tau PET. The consequences of the harmonization I developed have been explored at a preliminary level. My proposal should be considered a starting point, as I mainly dealt with the quality measures themselves, while the harmonization of the variables was done with a linear regression model. Although harmonization through linear models is often used, more sophisticated techniques are present in the literature, and it would be interesting to combine them with my work. Further investigations would be desirable in the future.
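As a rough illustration of the a-posteriori approach described above, the following sketch regresses an imaging measure on scan-quality covariates and removes only the fitted technical component. The data are entirely synthetic, and the variable names and the one-step least-squares residualization are illustrative assumptions, not the thesis pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 200

# Hypothetical texture-based quality measures (3 per scan), standing in
# for the ROI quality metrics described in the thesis.
quality = rng.normal(size=(n_scans, 3))

# Unobserved inter-individual (biological) signal we want to preserve.
biology = rng.normal(size=n_scans)

# Observed quantitative measure = biology + a technical effect that is
# a linear function of the quality measures.
tech_coefs = np.array([0.8, -0.5, 0.3])
observed = biology + quality @ tech_coefs

def harmonize(values, quality_measures):
    """Fit values ~ intercept + quality by least squares and subtract
    only the quality-driven part, leaving the residual signal intact."""
    X = np.column_stack([np.ones(len(values)), quality_measures])
    beta, *_ = np.linalg.lstsq(X, values, rcond=None)
    return values - quality_measures @ beta[1:]

harmonized = harmonize(observed, quality)
```

Crucially, nothing biological or demographic enters the regression, only scan-quality covariates, so inter-individual differences survive in the residuals; a more sophisticated model (e.g. a ComBat-style empirical-Bayes fit) could replace the least-squares step while keeping the same residualization logic.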

    Patterns of amyloid accumulation in amyloid-negative cases

    Amyloid staging models have shown that regional abnormality occurs before global positivity. Several studies assumed that the trajectory of amyloid spread is homogeneous, but clinical evidence suggests that it is highly heterogeneous. We tested whether different amyloid-β (Aβ) patterns exist by applying clustering to amyloid-negative scans and investigating their demographic, clinical, cognitive, and biomarker correlates, and their cognitive trajectories. 151 individuals from the Geneva and Zurich cohorts with T1-MRI, a negative Aβ positron emission tomography scan (PET, centiloid < 12), and clinical assessment were included. N=123 underwent tau PET, and N=65 had follow-up neuropsychological assessment. We performed k-means clustering using 33 regional Aβ Standardized Uptake Value ratios (SUVr). Demographic, clinical, cognitive, and biomarker differences were investigated. Longitudinal cognitive changes by baseline cluster status were estimated using a linear mixed model. The cluster analysis identified two clusters: temporal-predominant (TP) and cingulate-predominant (CP). Tau deposition was higher in TP than in CP, and a trend toward greater cognitive decline in TP compared with CP was observed. This study suggests the existence of two Aβ deposition patterns in the earliest phases of Aβ accumulation, differently prone to tau pathology and cognitive decline.
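The clustering step can be sketched on synthetic data as below. The two sub-threshold patterns, the regions assigned to each, and the hand-rolled k-means are illustrative assumptions only, not the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions = 33  # one SUVr value per region, as in the study

# Two hypothetical sub-threshold deposition patterns: one loading on an
# early block of regions ("temporal-like"), one on a later block
# ("cingulate-like"). The region assignments here are invented.
pattern_a = np.zeros(n_regions); pattern_a[:10] = 0.3
pattern_b = np.zeros(n_regions); pattern_b[10:20] = 0.3
suvr = np.vstack([
    1.0 + pattern_a + 0.05 * rng.normal(size=(75, n_regions)),
    1.0 + pattern_b + 0.05 * rng.normal(size=(76, n_regions)),
])  # 151 scans in total, matching the cohort size

def kmeans(X, k=2, iters=50):
    # Deterministic init: pick k points evenly spread through the dataset.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(iters):
        # Assign each scan to its nearest center, then recompute centers.
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        centers = np.array([
            X[labels == j].mean(0) if (labels == j).any() else centers[j]
            for j in range(k)
        ])
    return labels

labels = kmeans(suvr)
```

In the study, the resulting cluster labels were then related to tau PET, demographics, and longitudinal cognition; here one would simply inspect `labels` per scan.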

    Towards a New Proposal for the Time Delay in Gravitational Lensing

    One application of cosmological gravitational lensing in General Relativity is the measurement of the Hubble constant H0 using the time delay Δt between multiple images of lensed quasars. This method has already been applied, obtaining a value of H0 compatible with that obtained from Type Ia supernovae (SNe Ia), but incompatible with that obtained by studying the anisotropies of the CMB. This difference could be a statistical fluctuation or an indication of new physics beyond the Standard Model of Cosmology, so it is desirable to improve the precision of the measurements. With current technological capabilities it is possible to determine H0 to percent-level uncertainty, so a more accurate theoretical model may be necessary to increase the precision of the determination of H0. The formula currently relating Δt to H0 is approximate; in this paper we present a proposal to go beyond the previous analysis and, within the context of a new model, we obtain a more precise formula than that present in the literature.
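For reference, the standard small-angle expression alluded to above (the approximate formula from the literature, not the new one proposed in the paper) relates the time delay to the lens geometry as:

```latex
\Delta t \;=\; \frac{1+z_L}{c}\,\frac{D_L D_S}{D_{LS}}
\left[\frac{(\theta-\beta)^2}{2}-\psi(\theta)\right]
```

where z_L is the lens redshift, D_L, D_S, and D_{LS} are the angular-diameter distances to the lens, to the source, and between lens and source, θ and β are the image and source angular positions, and ψ is the lensing potential. Since each angular-diameter distance scales as c/H0, the predicted Δt is inversely proportional to H0, which is why a measured time delay plus a lens model yields the Hubble constant.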

    Analysis of the Angular Dependence of Time Delay in Gravitational Lensing

    We consider an alternative formula for the time delay in gravitational lensing. Imposing a smoothness condition on the gravitationally deformed paths followed by photons from the source to the observer, we show that our formula displays the same degrees of freedom as the standard one. In addition, we show that the standard expression for the time delay is recovered when small angles are involved. These two features strongly support the claim that the formula studied in this paper is the generalization to arbitrary angles of the standard one, which is valid at small angles. It could therefore be a useful tool in astrophysics and cosmology, applicable to investigating the discrepancy between the various estimates of the Hubble constant. As an aside, two interesting consequences of our proposal for the time delay are discussed: the existence of a constraint on the gravitational potential generated by the lens, and a formula for the mass of the lens in the case of a central potential.